Kalman Filter example

Example 5 – Estimating the height of a building

Assume that we would like to estimate the height of a building using an imprecise altimeter. We know that the building's height doesn’t change over time, at least during the short measurement process.

The numerical example

• The true building height is 50 meters.
• The altimeter measurement error (standard deviation) is 5 meters.
• The ten measurements are: 49.03m, 48.44m, 55.21m, 49.98m, 50.6m, 52.61m, 45.87m, 42.64m, 48.26m, 55.84m

# Everything in meters
trueVal <- 50
sigman <- 5 # standard deviation
rn <- sigman**2 # the measurement uncertainty (variance)
zn <- c(49.03, 48.44, 55.21, 49.98, 50.6, 52.61, 45.87, 42.64, 48.26, 55.84)

Iteration Zero

Initialization

# Simply by looking at it
x00 <- 60

#  A human estimation error (standard deviation)
sigma <- 15
p00 <- sigma**2

Prediction

# Since our system’s Dynamic Model is constant, the building’s height doesn’t change
x10 <- x00

# The extrapolated estimate variance also doesn’t change
p10 <- p00

First Iteration

Step 1 - Measure

\(z_{1}\) = 49.03m and \(r_{1}\) = 25m².

Step 2 - Update

# Kalman Gain
k1 <- p10 / (p10 + rn); k1
## [1] 0.9
# Estimating the current state
x11 <- x10 + k1 * (zn[1] - x10); x11
## [1] 50.127
# Update the current estimate variance
p11 <- (1 - k1) * p10; p11
## [1] 22.5

Step 3 - Predict

# Since our system’s Dynamic Model is constant, the building’s height doesn’t change
x21 <- x11; x21
## [1] 50.127
# The extrapolated estimate variance also doesn’t change
p21 <- p11; p21
## [1] 22.5

Second Iteration

Step 1 - Measure

\(z_{2}\) = 48.44m and \(r_{2}\) = 25m².

Step 2 - Update

# Kalman Gain
k2 <- p21 / (p21 + rn); k2
## [1] 0.4736842
# Estimating the current state
x22 <- x21 + k2 * (zn[2] - x21); x22
## [1] 49.32789
# Update the current estimate variance
p22 <- (1 - k2) * p21; p22
## [1] 11.84211

Step 3 - Predict

# Since our system’s Dynamic Model is constant, the building’s height doesn’t change
x31 <- x22; x31
## [1] 49.32789
# The extrapolated estimate variance also doesn’t change
p31 <- p22; p31
## [1] 11.84211

Generalize

We create a function that generalizes the method so it can be reused more easily.

sigman <- 5 
# Pad with a leading 0 so that measurement z_n sits at index n + 1
zn <- c(0, 49.03, 48.44, 55.21, 49.98, 50.6, 52.61, 45.87, 42.64, 48.26, 55.84)
# No process noise: the building height is constant
q <- 0

KF <- function(sigmaIn, initVal, sigma, zn, q){
  n <- length(zn)
  
  # Estimate and variance matrices: x[i, j] is the state at step i - 1 given measurements up to step j - 1
  x <- matrix(nrow = n + 1, ncol = n)
  rownames(x) <- c(0:n)
  colnames(x) <- c(0:(n - 1))
  
  p <- matrix(nrow = n + 1, ncol = n)
  rownames(p) <- c(0:n)
  colnames(p) <- c(0:(n - 1))
  
  k <- matrix(nrow = n, ncol = 1)
  rownames(k) <- c(0:(n - 1))
  colnames(k) <- 'K'
  
  # The measurement variance
  rn <- sigma**2
  
  # Step 0: initialize, then predict
  x[1, 1] <- initVal
  p[1, 1] <- sigmaIn**2
  
  x[2, 1] <- x[1, 1]
  p[2, 1] <- p[1, 1] + q
  
  for (i in 2:n){
    # Kalman Gain
    k[i] <- p[i, i - 1] / (p[i, i - 1] + rn)
    print(paste0("k ", k[i]))
    
    # Estimating the current state
    x[i, i] <- x[i, i - 1] + k[i] * (zn[i] - x[i, i - 1])
    
    # Update the current estimate variance
    p[i, i] <- (1 - k[i]) * p[i, i - 1]
    
    # Since our system’s Dynamic Model is constant, the building’s height doesn’t change
    x[i + 1, i] <- x[i, i]
    print(paste0("x ", x[i + 1, i]))
    
    # The extrapolated estimate variance grows by the process noise q
    p[i + 1, i] <- p[i, i] + q
    print(paste0("p ", p[i + 1, i]))
  }
  return(list(x, p, k))
}

result <- KF(sigma, x00, sigman, zn, q)
## [1] "k 0.9"
## [1] "x 50.127"
## [1] "p 22.5"
## [1] "k 0.473684210526316"
## [1] "x 49.3278947368421"
## [1] "p 11.8421052631579"
## [1] "k 0.321428571428571"
## [1] "x 51.2185714285714"
## [1] "p 8.03571428571429"
## [1] "k 0.243243243243243"
## [1] "x 50.9172972972973"
## [1] "p 6.08108108108108"
## [1] "k 0.195652173913043"
## [1] "x 50.8552173913044"
## [1] "p 4.89130434782609"
## [1] "k 0.163636363636364"
## [1] "x 51.1423636363636"
## [1] "p 4.09090909090909"
## [1] "k 0.140625"
## [1] "x 50.4009375"
## [1] "p 3.515625"
## [1] "k 0.123287671232877"
## [1] "x 49.4441095890411"
## [1] "p 3.08219178082192"
## [1] "k 0.109756097560976"
## [1] "x 49.3141463414634"
## [1] "p 2.74390243902439"
## [1] "k 0.0989010989010989"
## [1] "x 49.9595604395604"
## [1] "p 2.47252747252747"
x <- as.matrix(result[[1]])
p <- as.matrix(result[[2]])
k <- as.matrix(result[[3]])

Results analysis

We want to verify Kalman Filter convergence: the Kalman Gain should gradually decrease until it reaches a steady state. When the Kalman Gain is low, the weight given to the noisy measurements is also low. The following plot shows the Kalman Gain for the 10 measurements in the example above.
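For this static model with no process noise, the gain sequence can also be checked against a closed form derived from the update rules above: with \(k_{n} = p_{n-1}/(p_{n-1} + r)\) and \(p_{n} = (1 - k_{n})p_{n-1}\), each gain works out to \(k_{n} = p_{0}/(n p_{0} + r)\). A quick sketch (the closed form is our own derivation, not part of the original example):

```r
p0 <- 15**2   # initial estimate variance (sigma = 15)
r  <- 5**2    # measurement variance (sigman = 5)
n  <- 1:10
kClosed <- p0 / (n * p0 + r)   # closed-form gain for q = 0
round(kClosed, 4)
```

The first and last values, 0.9 and 0.0989, match the printed gains of the iterative filter.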

plot(1:10, k[2:11], 'l', xlab = 'Measurement number', ylab = 'Kalman Gain', lwd = 2)
title(main = 'Kalman Gain')
grid()

We also want to examine accuracy. Accuracy indicates how close the estimate is to the true value. The estimation error is the difference between the true value (the green line) and the KF estimate (the red line). We can see that the estimation errors of our KF decrease in the filter convergence region.

library(ggplot2)
library(dplyr)
library(plotly)

# Dataframe with all the data
data <- data.frame(
  recta = c(1:10),
  estimates = diag(x)[2:11],
  measurements = zn[2:11],
  true_value = rep(50, 10)
)

# Plot to compare the true value, measured values, and estimates for the first 10 iterations
plot <- ggplot(data, aes(x = recta), show.legend = TRUE) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) +
  geom_line(aes(y = measurements, color = "Measurements")) +
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements")) +
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_point(aes(x = 1, y = 60, color = "Initialization", shape = "Initialization")) +
  labs(title = "Building height",
       x = "Measurement number",
       y = "Height(m)",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "Measurements" = "blue", "True values" = "green", "Initialization" = "purple"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "Measurements" = 15, "True values" = 18, "Initialization" = 18), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)
dataInt <- data.frame(
  recta = c(1:10),
  estimates = diag(x)[2:11],
  true_value = rep(50, 10),
  
  # The 95% confidence interval is estimate ± 1.96 * sqrt(p), since p is the estimate variance
  interMax = diag(x)[2:11] + 1.96 * sqrt(diag(p)[2:11]),
  interMin = diag(x)[2:11] - 1.96 * sqrt(diag(p)[2:11])
)

plot <- ggplot(dataInt, aes(x = recta)) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) + 
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  
  labs(title = "Building height",
       x = "Measurement number",
       y = "Height(m)",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "95% confidence interval" = 20), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)

Now we decrease the measurement uncertainty. The standard deviation (\(\sigma\)) of the measurement errors quantifies the measurement uncertainty, so we lower \(\sigma\) from 5 to 2:

resultLow <- KF(sigma, x00, 2, zn, q)
## [1] "k 0.982532751091703"
## [1] "x 49.221615720524"
## [1] "p 3.93013100436682"
## [1] "k 0.495594713656388"
## [1] "x 48.8342511013216"
## [1] "p 1.98237885462555"
## [1] "k 0.331369661266569"
## [1] "x 50.9469808541973"
## [1] "p 1.32547864506627"
## [1] "k 0.248893805309735"
## [1] "x 50.7063053097345"
## [1] "p 0.995575221238939"
## [1] "k 0.199291408325952"
## [1] "x 50.685119574845"
## [1] "p 0.797165633303809"
## [1] "k 0.166174298375185"
## [1] "x 51.0049852289513"
## [1] "p 0.664697193500739"
## [1] "k 0.142495250158328"
## [1] "x 50.2732742241925"
## [1] "p 0.569981000633312"
## [1] "k 0.124722838137472"
## [1] "x 49.3212305986696"
## [1] "p 0.498891352549889"
## [1] "k 0.110892065056678"
## [1] "x 49.2035485460818"
## [1] "p 0.443568260226713"
## [1] "k 0.0998225377107365"
## [1] "x 49.866015971606"
## [1] "p 0.399290150842946"
# The Kalman Filter output includes the estimate and the estimate uncertainty
xLow <- as.matrix(resultLow[[1]])
pLow <- as.matrix(resultLow[[2]])
kLow <- as.matrix(resultLow[[3]])

dataIntLow <- data.frame(
  recta = c(1:10),
  estimates = diag(xLow)[2:11],
  true_value = rep(50, 10),
  
  # The 95% confidence interval is estimate ± 1.96 * sqrt(p), since p is the estimate variance
  interMax = diag(xLow)[2:11] + 1.96 * sqrt(diag(pLow)[2:11]),
  interMin = diag(xLow)[2:11] - 1.96 * sqrt(diag(pLow)[2:11])
)

plot <- ggplot(dataIntLow, aes(x = recta)) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) + 
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  
  labs(title = "Building height",
       x = "Measurement number",
       y = "Height(m)",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "95% confidence interval" = 20), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)

Example 6 – Estimating the temperature of the liquid in a tank

We can describe the system dynamics by the following equation:
\(x_{n} = T + w_{n}\) (5.2)
Where:
\(T\) is the constant temperature
\(w_{n}\) is a random process noise with variance \(q\)
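To build intuition for equation (5.2), we can simulate the process model directly; this is a minimal sketch (the seed is an arbitrary assumption for reproducibility):

```r
set.seed(1)                  # arbitrary seed, for reproducibility only
T_true <- 50                 # the constant temperature T
q <- 0.0001                  # process noise variance
# x_n = T + w_n, where w_n ~ N(0, q)
xn <- T_true + rnorm(10, mean = 0, sd = sqrt(q))
round(xn, 3)
```

With such a small \(q\), the simulated states stay within a few hundredths of a degree of \(T\).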

The numerical example

# Let us assume a true temperature of 50 degrees Celsius
trueTemp <- 50

# The process noise variance (q) to 0.0001
q <- 0.0001

# The measurement error (standard deviation) is 0.1 degrees Celsius
sigma <- 0.1

# The measurements are taken every 5 seconds.

# The true liquid temperature values at the measurement points are: 
trueValues <- c(50.005, 49.994, 49.993, 50.001, 50.006, 49.998, 50.021, 50.005, 50, 49.997)

# The measurements are: 
measurements <- c(49.986, 49.963, 50.09, 50.001, 50.018, 50.05, 49.938, 49.858, 49.965, 50.114)
data2 <- data.frame(
  recta = c(1:10),
  measurements = measurements,
  trueValues = trueValues
)

# The following plot compares the true liquid temperature and the measurements
plot <- ggplot(data2, aes(x = recta)) + 
  geom_line(aes(y = measurements, color = "Measurements"), size = 1) + 
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements"), size = 2) +
  geom_line(aes(y = trueValues, color = "True values"), size = 1) + 
  geom_point(aes(y = trueValues, color = "True values", shape = "True values"), size = 2) +
  scale_color_manual(values = c("Measurements" = "blue", "True values" = "green"), name = "") +
  scale_shape_manual(values = c("Measurements" = 15, "True values" = 18), name = "") +
  labs(color = "",
       title = "The Liquid Temperature",
       x = "Measurement number",
       y = "Temperature(Cº)") +
  theme(legend.position = "right")

ggplotly(plot) %>% config(displayModeBar = FALSE)

Iteration Zero

Initialization

x00 <- 60

# Initialization estimate error $\sigma$
sigmaIn <- 100 

# The Estimate Variance of the initialization is the error variance ($\sigma$^2)
p00 <- sigmaIn**2

This variance is very high. If we initialize with a more meaningful value, the Kalman Filter converges faster.
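With such a diffuse prior, the first update essentially discards the initial guess: the first Kalman Gain is almost exactly 1, so the first estimate is almost exactly the first measurement regardless of x00. A quick check using the values above:

```r
p00 <- 100**2    # diffuse initial estimate variance (sigmaIn = 100)
q   <- 0.0001    # process noise variance
r   <- 0.1**2    # measurement variance (sigma = 0.1)

# First gain: predicted variance over (predicted variance + measurement variance)
k1 <- (p00 + q) / (p00 + q + r)
k1
```

This reproduces the first printed gain, 0.99999900000101.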

Prediction

# Since our model has constant dynamics, the predicted estimate is equal to the current estimate:
x10 <- x00

# The extrapolated estimate variance:
p10 <- p00 + q; p10
## [1] 10000
zn <- c(0, measurements)
result <- KF(sigmaIn, x00, sigma, zn, q)
## [1] "k 0.99999900000101"
## [1] "x 49.9860100139899"
## [1] "p 0.0100999900005886"
## [1] "k 0.502487314684875"
## [1] "x 49.9744477738492"
## [1] "p 0.00512487314684875"
## [1] "k 0.338837430045918"
## [1] "x 50.0136011931943"
## [1] "p 0.00348837430045918"
## [1] "k 0.258620810985385"
## [1] "x 50.010342262391"
## [1] "p 0.00268620810985385"
## [1] "k 0.211742396671498"
## [1] "x 50.0119637301054"
## [1] "p 0.00221742396671498"
## [1] "k 0.181496850134374"
## [1] "x 50.0188671932821"
## [1] "p 0.00191496850134374"
## [1] "k 0.160719560536629"
## [1] "x 50.005870253516"
## [1] "p 0.00170719560536629"
## [1] "k 0.145824470941935"
## [1] "x 49.984307152029"
## [1] "p 0.00155824470941935"
## [1] "k 0.134816725947104"
## [1] "x 49.9817042250051"
## [1] "p 0.00144816725947104"
## [1] "k 0.126497737729415"
## [1] "x 49.9984393412531"
## [1] "p 0.00136497737729415"
x <- as.matrix(result[[1]])
p <- as.matrix(result[[2]])
k <- as.matrix(result[[3]])

When the Kalman Gain is almost 1, the estimate uncertainty is much bigger than the measurement uncertainty. Thus the weight of the prior estimate is negligible, while the measurement weight is almost 1.

Results analysis

plot(1:10, k[2:11], 'l', xlab = 'Measurement number', ylab = 'Kalman Gain', lwd = 2)
title(main = 'Kalman Gain')
grid()

As we can see, the Kalman Gain gradually decreases; therefore, the KF converges.

dataInt <- data.frame(
  recta = c(1:10),
  estimates = diag(x)[2:11],
  true_value = trueValues,
  measurements = measurements,
  
  # The 95% confidence interval is estimate ± 1.96 * sqrt(p), since p is the estimate variance
  interMax = diag(x)[2:11] + 1.96 * sqrt(diag(p)[2:11]),
  interMin = diag(x)[2:11] - 1.96 * sqrt(diag(p)[2:11])
)

# The following chart compares the true value, measured values, and estimates. The confidence interval is 95%
plot <- ggplot(dataInt, aes(x = recta)) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) + 
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_line(aes(y = measurements, color = "Measurements")) +
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  
  labs(title = "The Liquid Temperature",
       x = "Measurement number",
       y = "Temperature ºC",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "Measurements" = "blue", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "Measurements" = 15, "95% confidence interval" = 20), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)

As we can see, the estimated value converges toward the true value. The uncertainties of the KF estimates are too high for the 95% confidence level.

Example 7 – Estimating the temperature of a heating liquid I

In this case, the dynamic model of the system is not constant - the liquid is heating at a rate of 0.1ºC every second.

The numerical example

The measurements are taken every 5 seconds.
Although the true dynamic model of the system is not constant (the liquid is heating), we treat the system as if it had a constant dynamic model (the temperature doesn’t change).

# We assume that the model is accurate. Thus we set the process noise variance (q) to 0.0001
q <- 0.0001

# The measurement error (standard deviation) is 0.1ºC
sigma <- 0.1

# The true liquid temperature values at the measurement points are:
trueValues2 <- c(50.505, 50.994, 51.493, 52.001, 52.506, 52.998, 53.521, 54.005, 54.5, 54.997)

# The measurements are: 
measurements2 <- c(50.486, 50.963, 51.597, 52.001, 52.518, 53.05, 53.438, 53.858, 54.465, 55.114)
data3 <- data.frame(
  recta = c(1:10),
  measurements = measurements2,
  trueValues = trueValues2
)

# The following chart compares the true liquid temperature and the measurements
plot <- ggplot(data3, aes(x = recta)) + 
  geom_line(aes(y = measurements, color = "Measurements"), size = 1) + 
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements"), size = 2) +
  geom_line(aes(y = trueValues, color = "True values"), size = 1) + 
  geom_point(aes(y = trueValues, color = "True values", shape = "True values"), size = 2) +
  scale_color_manual(values = c("Measurements" = "blue", "True values" = "green"), name = "") +
  scale_shape_manual(values = c("Measurements" = 15, "True values" = 18), name = "") +
  labs(color = "",
       title = "The Liquid Temperature",
       x = "Measurement number",
       y = "Temperature(Cº)") +
  theme(legend.position = "right")

ggplotly(plot) %>% config(displayModeBar = FALSE)
initVal <- 10
sigmaIn <- 100
zn <- c(0, measurements2)

result <- KF(sigmaIn, initVal, sigma, zn, q)
## [1] "k 0.99999900000101"
## [1] "x 50.4859595140409"
## [1] "p 0.0100999900005886"
## [1] "k 0.502487314684875"
## [1] "x 50.7256663068264"
## [1] "p 0.00512487314684875"
## [1] "k 0.338837430045918"
## [1] "x 51.0209067761338"
## [1] "p 0.00348837430045918"
## [1] "k 0.258620810985385"
## [1] "x 51.2743792805313"
## [1] "p 0.00268620810985385"
## [1] "k 0.211742396671498"
## [1] "x 51.537706512222"
## [1] "p 0.00221742396671498"
## [1] "k 0.181496850134374"
## [1] "x 51.8121830167324"
## [1] "p 0.00191496850134374"
## [1] "k 0.160719560536629"
## [1] "x 52.0734836077962"
## [1] "p 0.00170719560536629"
## [1] "k 0.145824470941935"
## [1] "x 52.3337097665765"
## [1] "p 0.00155824470941935"
## [1] "k 0.134816725947104"
## [1] "x 52.6210433378897"
## [1] "p 0.00144816725947104"
## [1] "k 0.126497737729415"
## [1] "x 52.9363967159041"
## [1] "p 0.00136497737729415"
x <- as.matrix(result[[1]])
p <- as.matrix(result[[2]])
k <- as.matrix(result[[3]])

Results analysis

dataInt <- data.frame(
  recta = c(1:10),
  estimates = diag(x)[2:11],
  true_value = trueValues2,
  measurements = measurements2,
  
  # The 95% confidence interval is estimate ± 1.96 * sqrt(p), since p is the estimate variance
  interMax = diag(x)[2:11] + 1.96 * sqrt(diag(p)[2:11]),
  interMin = diag(x)[2:11] - 1.96 * sqrt(diag(p)[2:11])
)

# The following chart compares the true value, measured values, and estimates. The confidence interval is 95%
plot <- ggplot(dataInt, aes(x = recta)) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) + 
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_line(aes(y = measurements, color = "Measurements")) +
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  
  labs(title = "The Liquid Temperature",
       x = "Measurement number",
       y = "Temperature ºC",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "Measurements" = "blue", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "Measurements" = 15, "95% confidence interval" = 20), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)

As we can see, the Kalman Filter has failed to provide a reliable estimation; there is a lag error in the Kalman Filter estimate.
There are two reasons for the lag error in our Kalman Filter example:
• The dynamic model doesn’t fit the case.
• We have chosen a very low process noise (q = 0.0001), while the true temperature changes are much more significant.
In steady state the lag error is constant; therefore, the estimate curve has the same slope as the true-value curve.

There are two possible ways to fix the lag error:

• If we know that the liquid temperature can change linearly, we can define a new model that accounts for a possible linear change in the liquid temperature. This is the preferred method. However, it won’t improve the Kalman Filter performance if the temperature change can’t be modeled.

• On the other hand, since our model is not well defined, we can lower the stated reliability of the process model by increasing the process noise (q).

In short, the lag error is caused by wrong dynamic model and process noise definitions.
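The first option, a model that allows a linear temperature change, can be sketched with a simple constant-rate (alpha-beta) filter. This is our own illustration, not part of the original example, and the gains alpha = 0.8 and beta = 0.5 are illustrative assumptions, not tuned values:

```r
# Constant-rate model: state is (temperature, rate); predict over dt, then correct
alphaBeta <- function(zn, dt, alpha, beta){
  temp <- zn[1]; rate <- 0          # initialize from the first measurement
  est <- temp
  for (z in zn[-1]){
    tempPred <- temp + dt * rate    # predict with the linear model
    resid <- z - tempPred           # innovation
    temp <- tempPred + alpha * resid
    rate <- rate + beta * resid / dt
    est <- c(est, temp)
  }
  est
}

measurements2 <- c(50.486, 50.963, 51.597, 52.001, 52.518, 53.05, 53.438, 53.858, 54.465, 55.114)
est <- alphaBeta(measurements2, dt = 5, alpha = 0.8, beta = 0.5)   # illustrative gains
round(est, 3)
```

Once the rate estimate settles near the true heating rate (0.1ºC per second, i.e. 0.5ºC per 5-second step), the lag largely disappears: the final estimate is within about 0.1ºC of the true 54.997ºC, versus a lag of about 2ºC for the constant model.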

Another problem is the low estimate uncertainty: the KF fails to provide accurate estimates, yet it is confident in them. This is an example of a bad KF design.
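We can quantify the overconfidence with a rough coverage check, using the (rounded) estimates and predicted variances printed by KF() above; a well-designed 95% interval should contain roughly 9 or 10 of the 10 true values:

```r
# Rounded estimates and predicted variances from the KF() output for Example 7
est  <- c(50.48596, 50.72567, 51.02091, 51.27438, 51.53771,
          51.81218, 52.07348, 52.33371, 52.62104, 52.93640)
pVar <- c(0.01010, 0.00512, 0.00349, 0.00269, 0.00222,
          0.00191, 0.00171, 0.00156, 0.00145, 0.00136)
trueValues2 <- c(50.505, 50.994, 51.493, 52.001, 52.506, 52.998, 53.521, 54.005, 54.5, 54.997)

# How many true values fall inside the 95% band?
inside <- abs(trueValues2 - est) <= 1.96 * sqrt(pVar)
sum(inside)
```

Only the first true value lands inside the band; the other nine are far outside it, confirming that the reported uncertainty is badly miscalibrated.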

Example 8 – Estimating the temperature of a heating liquid II

Since our process is not well-defined, we increase the process variance (q) from 0.0001 to 0.15.

# Since our process is not well-defined, we increase the process noise variance (q) to 0.15
q <- 0.15

# The measurement error (standard deviation) is 0.1ºC
sigma <- 0.1

# The true liquid temperature values at the measurement points are:
trueValues2 <- c(50.505, 50.994, 51.493, 52.001, 52.506, 52.998, 53.521, 54.005, 54.5, 54.997)

# The measurements are: 
measurements2 <- c(50.486, 50.963, 51.597, 52.001, 52.518, 53.05, 53.438, 53.858, 54.465, 55.114)
initVal <- 10
sigmaIn <- 100
zn <- c(0, measurements2)

result <- KF(sigmaIn, initVal, sigma, zn, q)
## [1] "k 0.999999000016"
## [1] "x 50.4859595146478"
## [1] "p 0.159999990000317"
## [1] "k 0.941176467128137"
## [1] "x 50.9349387933287"
## [1] "p 0.159411764671281"
## [1] "k 0.940972222210166"
## [1] "x 51.5579199982093"
## [1] "p 0.159409722222102"
## [1] "k 0.9409715105554"
## [1] "x 51.9748456567912"
## [1] "p 0.159409715105554"
## [1] "k 0.940971508075736"
## [1] "x 52.4859384182383"
## [1] "p 0.159409715080757"
## [1] "k 0.940971508067096"
## [1] "x 53.0167042954713"
## [1] "p 0.159409715080671"
## [1] "k 0.940971508067066"
## [1] "x 53.4131315499039"
## [1] "p 0.159409715080671"
## [1] "k 0.940971508067066"
## [1] "x 53.8317400862823"
## [1] "p 0.159409715080671"
## [1] "k 0.940971508067066"
## [1] "x 54.4276196222917"
## [1] "p 0.159409715080671"
## [1] "k 0.940971508067066"
## [1] "x 55.0734840014115"
## [1] "p 0.159409715080671"
x <- as.matrix(result[[1]])
p <- as.matrix(result[[2]])
k <- as.matrix(result[[3]])

Results analysis

dataInt <- data.frame(
  recta = c(1:10),
  estimates = diag(x)[2:11],
  true_value = trueValues2,
  measurements = measurements2,
  
  # The 95% confidence interval is estimate ± 1.96 * sqrt(p), since p is the estimate variance
  interMax = diag(x)[2:11] + 1.96 * sqrt(diag(p)[2:11]),
  interMin = diag(x)[2:11] - 1.96 * sqrt(diag(p)[2:11])
)

# The following chart compares the true value, measured values, and estimates. The confidence interval is 95%
# There is no lag error!
plot <- ggplot(dataInt, aes(x = recta)) +
  geom_line(aes(y = estimates, color = "Estimates")) +
  geom_point(aes(y = estimates, color = "Estimates", shape = "Estimates")) + 
  geom_line(aes(y = true_value, color = "True values")) +
  geom_point(aes(y = true_value, color = "True values", shape = "True values")) +
  geom_line(aes(y = measurements, color = "Measurements")) +
  geom_point(aes(y = measurements, color = "Measurements", shape = "Measurements")) +
  geom_line(aes(y = interMax, color = "95% confidence interval"), linetype = 1) +
  geom_line(aes(y = interMin, color = "95% confidence interval"), linetype = 1) +
  geom_ribbon(aes(ymin = interMin, ymax = interMax), fill = "yellow", alpha = 0.2) +
  
  labs(title = "The Liquid Temperature",
       x = "Measurement number",
       y = "Temperature ºC",
       color = "") +
  
  scale_color_manual(values = c("Estimates" = "red", "True values" = "green", "Measurements" = "blue", "95% confidence interval" = "yellow"), name = "") +
  scale_shape_manual(values = c("Estimates" = 19, "True values" = 18, "Measurements" = 15, "95% confidence interval" = 20), name = "")

ggplotly(plot) %>% config(displayModeBar = FALSE)
plot(1:10, k[2:11], 'l', xlab = 'Measurement number', ylab = 'Kalman Gain', lwd = 2)
title(main = 'Kalman Gain')
grid()

Due to the high process uncertainty, the measurement weight is much higher than the weight of the estimate. Thus, the Kalman Gain is high, converging to 0.94. The good news is that we can trust the estimates of this KF: the true values (the green line) are within the 95% confidence region.
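The steady-state gain of 0.94 can be verified directly. At steady state the variance stops changing, so \(p^{*} = (1 - k)(p^{*} + q)\) with \(k = (p^{*} + q)/(p^{*} + q + r)\); substituting \(m = p^{*} + q\) gives the quadratic \(m^{2} - qm - qr = 0\). A quick check (our own derivation, not part of the original example):

```r
q <- 0.15       # process noise variance
r <- 0.1**2     # measurement variance

# Solve m^2 - q*m - q*r = 0 for m = p* + q (positive root)
m <- (q + sqrt(q**2 + 4 * q * r)) / 2
k <- m / (m + r)   # steady-state Kalman Gain
k
```

This gives k = 0.940971..., matching the value the filter converges to after a few iterations.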